Multilingual Pretrained Language Models (MPLMs) have shown their strong multilinguality in recent empirical cross-lingual transfer studies. In this paper, we propose the Prompts Augmented by Retrieval Crosslingually (PARC) pipeline to improve the zero-shot performance on low-resource languages (LRLs) by augmenting the context with semantically similar sentences retrieved from a high-resource language (HRL) as prompts. PARC improves the zero-shot performance on three downstream tasks (binary sentiment classification, topic categorization and natural language inference) with multilingual parallel test sets across 10 LRLs covering 6 language families in both unlabeled settings (+5.1%) and labeled settings (+16.3%). PARC-labeled also outperforms the finetuning baseline by 3.7%. We find a significant positive correlation between cross-lingual transfer performance on one side, and the similarity between the high- and low-resource languages as well as the amount of low-resource pretraining data on the other side. A robustness analysis suggests that PARC has the potential to achieve even stronger performance with more powerful MPLMs.
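To make the retrieval-augmented prompting step concrete, below is a minimal sketch: a multilingual sentence encoder retrieves the most similar labeled high-resource sentences for a low-resource input, which are prepended as a cloze-style prompt. The encoder checkpoint, prompt template, and mask token are illustrative assumptions, not PARC's released pipeline.

```python
# Minimal sketch of cross-lingual prompt augmentation (labeled setting).
# Encoder checkpoint and "Sentiment: [MASK]" template are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def build_prompt(lrl_input, hrl_pool, hrl_labels, k=1):
    """Prepend the k nearest labeled HRL sentences to the LRL input."""
    pool_emb = encoder.encode(hrl_pool, normalize_embeddings=True)
    query_emb = encoder.encode([lrl_input], normalize_embeddings=True)[0]
    top = np.argsort(-(pool_emb @ query_emb))[:k]   # cosine-similarity ranking
    demos = [f"{hrl_pool[i]} Sentiment: {hrl_labels[i]}." for i in top]
    return " ".join(demos) + f" {lrl_input} Sentiment: [MASK]."
```

The MPLM would then score label verbalizers at the mask position; in the unlabeled setting, the retrieved sentences would be prepended without their labels.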
RNA structure determination and prediction can facilitate RNA-targeted drug development and the design of engineerable synthetic elements. However, due to the intrinsic structural flexibility of RNA, all three mainstream structure determination methods (X-ray crystallography, NMR, and cryo-EM) encounter challenges when resolving RNA structures, which leads to a scarcity of resolved RNA structures. Computational prediction methods serve as a complement to experimental techniques, but none of the de novo approaches is based on deep learning because too few structures are available. Instead, most of them adopt time-consuming sampling strategies, and their performance appears to have reached a plateau. In this work, we develop the first end-to-end deep learning approach, E2Efold-3D, to accurately perform de novo RNA structure prediction. Several novel components are proposed to overcome the data scarcity, such as a fully-differentiable end-to-end pipeline, secondary-structure-assisted self-distillation, and a parameter-efficient backbone formulation. These designs are validated on an independent, non-overlapping RNA puzzle test dataset and reach an average sub-4 Å root-mean-square deviation, demonstrating superior performance compared with state-of-the-art methods. Interestingly, it also achieves promising results when predicting RNA complex structures, a feat that no previous system could accomplish. When E2Efold-3D is coupled with experimental techniques, the RNA structure prediction field can be greatly advanced.
Language models demonstrate both quantitative improvements and new qualitative capabilities as scale increases. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Batch normalization (BN) is widely used in modern neural networks and has been shown to represent domain-related knowledge, making it ineffective for cross-domain tasks such as unsupervised domain adaptation (UDA). Existing BN variants aggregate source and target domain knowledge in the same channel of the normalization module. However, the misalignment between the features of corresponding channels across domains often leads to sub-optimal transferability. In this paper, we exploit cross-domain relations and propose a novel normalization method, Reciprocal Normalization (RN). Specifically, RN first presents a Reciprocal Compensation (RC) module to acquire a compensatory component for each channel in both domains based on cross-domain channel-wise correlations. RN then develops a Reciprocal Aggregation (RA) module to adaptively aggregate each feature with its cross-domain compensatory components. As an alternative to BN, RN is better suited to UDA problems and can be easily integrated into popular domain adaptation methods. Experiments show that the proposed RN outperforms existing normalization counterparts by a large margin and helps state-of-the-art adaptation methods achieve better results. The source code is available at https://github.com/openning07/reciprocal-normalization-for-da.
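To make the reciprocal idea more tangible, here is a hedged sketch of channel-wise cross-domain compensation, assuming equal-sized mini-batches of features from both domains; the rank-style affinity and the fixed blending weight are simplifications standing in for the paper's RC and RA modules, not the authors' implementation.

```python
# Simplified sketch of cross-domain channel compensation; shapes, the
# affinity construction, and the fixed blend weight are all assumptions.
import torch

def reciprocal_compensation(f_src, f_tgt):
    """f_src, f_tgt: (N, C, H, W) features from source / target mini-batches."""
    # Per-sample channel descriptors: (N, C) spatial means.
    d_src = f_src.mean(dim=(2, 3))
    d_tgt = f_tgt.mean(dim=(2, 3))
    # Cross-domain channel-wise affinity (C x C), row-normalized.
    affinity = torch.softmax(d_src.t() @ d_tgt / d_src.shape[0], dim=1)
    # Compensatory response for each source channel, mixed from target channels.
    return torch.einsum("ct,nthw->nchw", affinity, f_tgt)

def reciprocal_aggregation(f, comp, alpha=0.5):
    # The paper's adaptive aggregation, reduced here to a fixed convex blend.
    return alpha * f + (1 - alpha) * comp
```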
Structured pruning is a commonly used technique for deploying deep neural networks (DNNs) on resource-constrained devices. However, existing pruning methods are usually heuristic, task-specific, and require an additional fine-tuning procedure. To overcome these limitations, we propose a framework that compresses DNNs into slimmer architectures with competitive performance and significant FLOPs reduction by Only-Train-Once (OTO). OTO contains two keys: (i) we partition the parameters of DNNs into zero-invariant groups, enabling us to prune zero groups without affecting the output; and (ii) to promote zero groups, we formulate a structured-sparsity optimization problem and propose a novel optimization algorithm, Half-Space Stochastic Projected Gradient (HSPG), to solve it, which outperforms standard proximal methods on group sparsity exploration while maintaining comparable convergence. To demonstrate the effectiveness of OTO, we train and compress full models simultaneously from scratch without fine-tuning for inference speedup and parameter reduction, achieving state-of-the-art results on VGG16 for CIFAR10, ResNet50 for CIFAR10, and BERT for SQuAD, and competitive results on ResNet50 for ImageNet. The source code is available at https://github.com/tianyic/only_train_once.
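The zero-invariant-group idea can be sketched as follows for a Conv-BN pair: each output channel's filter, bias, and BN scale/shift form one group, and a group with (near-)zero joint norm can be removed without changing the network output. HSPG itself is not reproduced here; the grouping below is a simplified assumption.

```python
# Sketch of zero-invariant groups for a Conv2d followed by BatchNorm2d.
# Grouping granularity is an assumption; HSPG is not shown.
import torch
import torch.nn as nn

def zero_invariant_groups(conv: nn.Conv2d, bn: nn.BatchNorm2d):
    """One group per output channel: conv filter (+ bias) and BN gamma/beta."""
    groups = []
    for c in range(conv.out_channels):
        parts = [conv.weight[c].flatten(), bn.weight[c:c + 1], bn.bias[c:c + 1]]
        if conv.bias is not None:
            parts.append(conv.bias[c:c + 1])
        groups.append(torch.cat(parts))
    return groups

def prunable_channels(conv, bn, tol=1e-8):
    """Channels whose whole group is (near) zero can be pruned output-invariantly."""
    return [c for c, g in enumerate(zero_invariant_groups(conv, bn))
            if g.norm() <= tol]
```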
Error-bounded lossy compression is becoming an indispensable technique for the success of today's scientific projects, which produce vast volumes of data during simulation or instrument data acquisition. It not only significantly reduces data size but also controls the compression error based on user-specified error bounds. Autoencoder (AE) models have been widely used in image compression, but few AE-based compression methods support error-bounding features, which are required by scientific applications. To address this issue, we explore using convolutional autoencoders to improve error-bounded lossy compression for scientific data, with the following three key contributions. (1) We provide an in-depth investigation of the characteristics of various autoencoder models and develop an error-bounded autoencoder-based framework based on the SZ model. (2) We optimize the compression quality of the main stages in our designed AE-based error-bounded compression framework, fine-tuning the block size and latent size and optimizing the compression efficiency of the latent vectors. (3) We evaluate our proposed solution using five real-world scientific datasets and compare it with six related works. Experiments show that our solution exhibits very competitive compression quality among all the compressors tested. In absolute terms, it achieves much better compression quality than SZ2.1 and ZFP in cases with high compression ratios (100%-800% improvement in compression ratio at the same data distortion).
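A minimal sketch of how an autoencoder stage can be made error-bounded, in the spirit of SZ-style residual quantization: the AE reconstruction is corrected by residuals quantized with a step of twice the error bound, so every decompressed value stays within the bound. The `encoder`/`decoder` callables are assumptions, not the paper's framework.

```python
# Hedged sketch of an error-bounded AE compressor; encoder/decoder are
# assumed callables, and entropy coding of outputs is omitted.
import numpy as np

def compress(data, encoder, decoder, error_bound):
    latent = encoder(data)                       # lossy AE stage
    recon = decoder(latent)
    # Linear-scale quantization with step 2*eb keeps each dequantized
    # residual within error_bound of the true residual.
    q = np.round((data - recon) / (2 * error_bound)).astype(np.int32)
    return latent, q                             # both entropy-coded in practice

def decompress(latent, q, decoder, error_bound):
    return decoder(latent) + q * (2 * error_bound)
```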
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetV2 and the effective Transformer in ViT, inductively abstracting a general Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though instantiations share the same framework. Motivated by this observation, we deduce a simple yet efficient modern Inverted Residual Mobile Block (iRMB) for mobile applications, which absorbs CNN-like efficiency for modeling short-distance dependencies and Transformer-like dynamic modeling capability for learning long-distance interactions. Furthermore, we design a ResNet-like 4-phase Efficient MOdel (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods; e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing SoTA CNN-/Transformer-based models while trading off model accuracy and efficiency well.
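Below is a hedged sketch of such a hybrid block, pairing a depthwise convolution (local modeling) with multi-head self-attention (global modeling) inside an inverted residual; the layer sizes, ordering, and activation are illustrative assumptions, not the released EMO code.

```python
# Illustrative inverted residual block mixing depthwise conv and attention;
# hyperparameters below are assumptions, not EMO's published configuration.
import torch
import torch.nn as nn

class InvertedResidualMobileBlock(nn.Module):
    def __init__(self, dim, expand=4, heads=4):
        super().__init__()
        hidden = dim * expand
        self.expand = nn.Conv2d(dim, hidden, 1)
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.project = nn.Conv2d(hidden, dim, 1)
        self.act = nn.GELU()

    def forward(self, x):                        # x: (N, C, H, W)
        n, _, h, w = x.shape
        y = self.act(self.expand(x))
        y = self.act(self.dw(y))                 # short-distance dependency
        t = y.flatten(2).transpose(1, 2)         # (N, H*W, hidden)
        t, _ = self.attn(t, t, t)                # long-distance interaction
        y = y + t.transpose(1, 2).reshape(n, -1, h, w)
        return x + self.project(y)               # inverted residual shortcut
```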
A further understanding of cause and effect within observational data is critical across many domains, such as economics, health care, public policy, web mining, online advertising, and marketing campaigns. Although significant advances have been made to overcome the challenges in causal effect estimation with observational data, such as missing counterfactual outcomes and selection bias between treatment and control groups, existing methods mainly focus on source-specific and stationary observational data. Such learning strategies assume that all observational data are already available during the training phase and come from only one source. This practical concern of accessibility is ubiquitous in various academic and industrial applications. Consequently, in the era of big data, we face new challenges in causal inference with observational data: extensibility to incrementally available observational data, adaptability to the extra domain adaptation problem beyond the imbalance between treatment and control groups, and accessibility to enormous amounts of data. In this position paper, we formally define the problem of continual treatment effect estimation, describe its research challenges, and then present possible solutions. Moreover, we discuss future research directions on this topic.
Decompilation aims to transform a low-level programming language (LPL) (e.g., a binary file) into its functionally equivalent high-level programming language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing ideas from NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost of developing decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
Image virtual try-on aims at replacing the clothes in a personal image with a garment image (in-shop clothes), and has attracted increasing attention from the multimedia and computer vision communities. Prior methods successfully preserve the character of clothing images; however, occlusion remains a pernicious effect for realistic virtual try-on. In this work, we first present a comprehensive analysis of occlusions and categorize them into two aspects: i) Inherent-Occlusion: the ghost of the former cloth still exists in the try-on image; ii) Acquired-Occlusion: the target cloth warps onto an unreasonable body part. Based on this in-depth analysis, we find that the occlusions can be simulated by a novel semantically-guided mixup module, which can generate semantic-specific occluded images that work together with the try-on images to facilitate training a de-occlusion try-on (DOC-VTON) framework. Specifically, DOC-VTON first conducts sharpened semantic parsing on the try-on person. Aided by semantic guidance and pose priors, textures of various complexity are selectively blended with human parts in a copy-and-paste manner. Then, a Generative Module (GM) is utilized to synthesize the final try-on image and to learn de-occlusion jointly. In comparison to state-of-the-art methods, DOC-VTON achieves better perceptual quality by reducing occlusion effects.
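As a toy illustration of simulating an acquired occlusion via semantically guided mixup, the sketch below blends a garment region onto pixels selected by a semantic parsing map in copy-and-paste fashion. The parsing-label convention, blending weight, and float-image assumption are illustrative, not the DOC-VTON module.

```python
# Toy occlusion simulation: images are float arrays in [0, 1] of shape
# (H, W, 3); `parsing` is an (H, W) integer semantic label map (assumptions).
import numpy as np

def simulate_occlusion(tryon_img, garment_img, parsing, part_id, lam=0.7):
    """Mix the garment into pixels whose parsing label equals part_id."""
    mask = (parsing == part_id)[..., None]        # (H, W, 1) boolean
    mixed = lam * garment_img + (1 - lam) * tryon_img
    return np.where(mask, mixed, tryon_img)       # occlude only the chosen part
```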